Results 1 - 20 of 65
1.
Sci Rep ; 14(1): 6386, 2024 03 16.
Article in English | MEDLINE | ID: mdl-38493261

ABSTRACT

Although, from an empirical viewpoint, there is no doubt that reflex mechanisms can contribute to tongue motor control in humans, neurophysiological evidence supporting this idea is limited. Previous failures to observe any tonic stretch reflex in the tongue had reduced the likelihood of a reflex contribution to tongue motor control. The current study presents experimental evidence of a human tongue reflex in response to a sudden stretch while holding a posture for speech. The latency was relatively long (50 ms), possibly mediated through a cortical arc. The activation peak was greater in a speech task than in a non-speech task, while background activation levels were similar in both tasks, and the peak amplitude in the speech task was not modulated by an additional task requiring a voluntary reaction to the perturbation. Computer simulations with a simplified linear mass-spring-damper model showed that, when a plausible physiological delay between reflex muscle activation and the corresponding force is taken into account, the recorded muscle activation response is suited to generating, with the appropriate timing, the tongue movement responses observed in a previous study. Our results clearly demonstrate that reflex mechanisms contribute to tongue posture stabilization for speech production.
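
A minimal sketch of the simulation idea described above, assuming illustrative parameter values rather than the authors' model: a linear mass-spring-damper perturbed by a brief stretch force, with a restoring reflex force that arrives only after a physiological delay (the 50 ms reflex latency plus an assumed activation-to-force lag).

    import numpy as np

    # Illustrative parameters (assumed, not from the paper)
    m, b, k = 0.01, 0.08, 2.0      # mass (kg), damping, stiffness
    delay = 0.075                  # 50 ms latency + assumed ~25 ms force lag (s)
    gain = 2.0                     # assumed reflex gain

    dt, T = 0.001, 0.6
    t = np.arange(0.0, T, dt)
    x = np.zeros_like(t)           # displacement from the held posture
    v = np.zeros_like(t)
    f_ext = np.where(t < 0.05, 0.2, 0.0)   # brief external stretch force

    for i in range(1, len(t)):
        j = i - int(delay / dt)                      # index of delayed feedback
        f_reflex = -gain * x[j] if j >= 0 else 0.0   # delayed restoring force
        a = (f_ext[i] - b * v[i - 1] - k * x[i - 1] + f_reflex) / m
        v[i] = v[i - 1] + a * dt                     # forward Euler integration
        x[i] = x[i - 1] + v[i] * dt

    print(f"peak displacement {x.max():.4f}, displacement at end {x[-1]:.4f}")

Setting gain to zero leaves the purely passive response, which makes the timing contribution of the delayed reflex term easy to compare.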


Subjects
Reflex, Speech, Humans, Electromyography, Postural Balance, Tongue, Skeletal Muscle/physiology
2.
J Speech Lang Hear Res ; : 1-12, 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38497731

ABSTRACT

PURPOSE: Orofacial somatosensory inputs play an important role in speech motor control and speech learning. Since receiving specific auditory-somatosensory inputs during speech perceptual training alters speech perception, similar perceptual training could also alter speech production. We examined whether production performance changed after perceptual training with orofacial somatosensory inputs. METHOD: We focused on the French vowels /e/ and /ø/, whose articulation contrasts in horizontal gestures. Perceptual training consisted of a vowel identification task contrasting /e/ and /ø/. During training, for the first group of participants, somatosensory stimulation was applied as a facial skin stretch in the backward direction. We recorded the target vowels uttered by the participants before and after the perceptual training and compared their F1, F2, and F3 formants. We also tested a control group with no somatosensory stimulation and another somatosensory group trained with a different vowel continuum (/e/-/i/). RESULTS: Perceptual training with somatosensory stimulation induced changes in F2 and F3 of the produced vowel sounds. F2 decreased consistently in the two somatosensory groups. F3 increased following the /e/-/ø/ training and decreased following the /e/-/i/ training. The F2 change was significantly correlated with the perceptual shift between the first and second half of the training phase in the somatosensory group with the /e/-/ø/ training, but not with the /e/-/i/ training. The control group showed no effect on F2 and F3, and only a tendency toward an F1 increase. CONCLUSION: The results suggest that somatosensory inputs associated with speech sound inputs can play a role in speech training and learning in both production and perception.
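
A minimal sketch of the kind of pre/post production comparison described above; the formant values, group size, and the use of a paired t-test are hypothetical illustrations, not the study's analysis.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # Hypothetical per-participant mean F2 of /ø/ before and after training (Hz)
    f2_before = rng.normal(1600, 60, 15)
    f2_after = f2_before - rng.normal(25, 15, 15)   # consistent F2 decrease

    t_stat, p = stats.ttest_rel(f2_before, f2_after)
    print(f"mean F2 change = {np.mean(f2_after - f2_before):.1f} Hz, p = {p:.4f}")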

3.
Mod Rheumatol ; 2023 Dec 20.
Article in English | MEDLINE | ID: mdl-38123467

ABSTRACT

OBJECTIVE: This study evaluated whether preoperative radiographs accurately predict intra-articular cartilage damage in varus knees. METHODS: The study assessed 181 knees in 156 patients who underwent total knee arthroplasty. Cartilage damage was graded by two examiners using the International Cartilage Repair Society (ICRS) classification; one used knee radiographs and the other used intraoperative photographs. It was then determined whether the radiographic cartilage assessment over- or underestimated the actual damage severity. Knee morphological characteristics affecting radiographic misestimation of damage severity were also identified. RESULTS: The concordance rate between radiographic and intraoperative assessments of the medial femoral condyle was high, at around 0.7. Large discrepancies were found for the lateral femoral condyle and the medial trochlear groove. Radiographic assessment underestimated cartilage damage on the medial side of the lateral femoral condyle due to a large lateral tibiofemoral joint opening and severe varus alignment (both r = -0.43). Medial trochlear damage was also underdiagnosed in cases of residual medial tibiofemoral cartilage and a shallow medial tibial slope (r = -0.25 and -0.21, respectively). CONCLUSIONS: Radiographic evaluation of knee osteoarthritis using ICRS grades was moderately practical. Lateral femoral and medial trochlear cartilage damage tended to be misestimated, but considering morphologic factors might improve the diagnostic rate.

4.
Sci Rep ; 13(1): 14534, 2023 09 04.
Article in English | MEDLINE | ID: mdl-37666917

ABSTRACT

The advent of Artificial Intelligence (AI) is fostering the development of innovative methods of communication and collaboration. Integrating AI into Information and Communication Technologies (ICTs) is ushering in an era of social progress that has the potential to empower marginalized groups. This transformation paves the way to a digital inclusion that could qualitatively empower the online presence of women, particularly in conservative and male-dominated regions. To explore this possibility, we investigated the effect of integrating conversational agents into online debates involving 240 Afghans discussing the fall of Kabul in August 2021. We found that the agent led to quantitative differences in how both genders contributed to the debate by raising issues, presenting ideas, and articulating arguments. We also found increased ideation and reduced inhibition for both genders, particularly females, when interacting exclusively with other females or the agent. The enabling character of the conversational agent reveals an apparatus that could empower women and increase their agency on online platforms.


Subjects
Artificial Intelligence, Communication, Humans, Female, Male, Psychological Inhibition, Mental Processes
5.
Audit Percept Cogn ; 6(1-2): 97-107, 2023.
Article in English | MEDLINE | ID: mdl-37260602

ABSTRACT

Introduction: Orofacial somatosensory inputs modify the perception of speech sounds. Such auditory-somatosensory integration likely develops alongside the acquisition of speech production. We examined whether the somatosensory effect in speech perception varies with individual characteristics of speech production. Methods: The somatosensory effect in speech perception was assessed as the change in the category boundary between /e/ and /ø/ in a vowel identification test resulting from somatosensory stimulation, applied together with the auditory input, that deformed the facial skin in the rearward direction corresponding to the articulatory movement for /e/. Speech production performance was quantified by the acoustic distances between the average first, second, and third formants of /e/ and /ø/ utterances recorded in a separate test. Results: The category boundary between /e/ and /ø/ was significantly shifted towards /ø/ by the somatosensory stimulation, consistent with previous research. The amplitude of the category boundary shift was significantly correlated with the acoustic distance between the mean second (and, marginally, third) formants of /e/ and /ø/ productions, with no correlation with the first formant distance. Discussion: Greater acoustic distances can be related to larger contrasts between the articulatory targets of vowels in speech production. These results suggest that the somatosensory effect in speech perception can be linked to speech production performance.
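
A minimal sketch, on hypothetical data, of the correlation analysis described above: per-speaker production contrast measured as the distance between the mean F2 of /e/ and /ø/, correlated with each listener's perceptual boundary shift. Participant count, formant values, and shifts are all illustrative assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 20                                   # hypothetical number of participants
    f2_e = rng.normal(2000, 80, n)           # mean F2 of /e/ per speaker (Hz)
    f2_oe = rng.normal(1550, 80, n)          # mean F2 of /ø/ per speaker (Hz)
    # Hypothetical boundary shifts loosely tied to production contrast
    boundary_shift = 0.002 * (f2_e - f2_oe) + rng.normal(0, 0.3, n)

    f2_distance = np.abs(f2_e - f2_oe)       # per-speaker production contrast
    r, p = stats.pearsonr(f2_distance, boundary_shift)
    print(f"r = {r:.2f}, p = {p:.3f}")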

6.
Sensors (Basel) ; 22(20)2022 Oct 19.
Article in English | MEDLINE | ID: mdl-36298305

ABSTRACT

Intelligent transportation systems encompass a series of technologies and applications that exchange information to improve road traffic and avoid accidents. Studies of accident statistics argue that human error causes most road accidents worldwide. For this reason, it is essential to model driver behavior to improve road safety. This paper presents a Fuzzy Rule-Based System for classifying drivers into different profiles according to their behavior. The system's knowledge base includes an ontology and a set of driving rules. The ontology models the main entities related to driver behavior and their relationships with the traffic environment. The driving rules help the inference system make decisions in different situations according to traffic regulations. The classification system has been integrated into an intelligent transportation architecture. Considering the user's driving style, the driving assistance system sends them recommendations, such as adjusting speed or choosing alternative routes, allowing them to prevent or mitigate negative transportation events such as road crashes or traffic jams. We carried out a set of experiments to test the expressiveness of the ontology and the effectiveness of the overall classification system in different simulated traffic situations. The results show that the ontology is expressive enough to model the knowledge of the proposed traffic scenarios, with an F1 score of 0.9. In addition, the system properly classifies drivers' behavior, with an F1 score of 0.84, outperforming Random Forest and Naive Bayes classifiers. In the simulation experiments, most of the drivers who were recommended an alternative route experienced an average time gain of 66.4%, showing the utility of the proposal.
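
A minimal sketch of a fuzzy rule-based driver classifier in the spirit of the system above; the membership functions, rules, input variables, and profile names are illustrative assumptions, not the paper's knowledge base or ontology.

    def tri(x, a, b, c):
        """Triangular membership function over [a, c] peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def classify_driver(speed_over_limit_kmh, headway_s):
        # Fuzzify the inputs (assumed linguistic terms)
        speeding = tri(speed_over_limit_kmh, 0, 20, 40)
        tailgating = tri(headway_s, 0.0, 0.5, 1.5)
        calm = tri(speed_over_limit_kmh, -10, 0, 10)

        # Rule base: min for AND, complement for NOT
        aggressive = min(speeding, tailgating)             # speeding AND tailgating
        moderate = min(speeding, 1.0 - tailgating)         # speeding AND keeping distance
        cautious = calm                                    # near the speed limit

        scores = {"aggressive": aggressive, "moderate": moderate, "cautious": cautious}
        return max(scores, key=scores.get), scores

    profile, scores = classify_driver(25, 0.6)
    print(profile, scores)

With 25 km/h over the limit and a 0.6 s headway, the speeding and tailgating rules fire strongly and the "aggressive" profile wins; the assistance layer would then attach a recommendation such as an alternative route to that profile.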


Subjects
Traffic Accidents, Automobile Driving, Humans, Traffic Accidents/prevention & control, Bayes Theorem, Transportation, Computer Simulation
7.
Front Psychol ; 13: 839087, 2022.
Article in English | MEDLINE | ID: mdl-35558689

ABSTRACT

Orofacial somatosensory inputs may play a role in the link between speech perception and production. Given that speech motor learning, which involves paired auditory and somatosensory inputs, results in changes to speech perceptual representations, somatosensory inputs may also be involved in learning or adaptive processes of speech perception. Here we show that repetitive pairing of somatosensory inputs and sounds, as occurs during speech production and motor learning, can also induce a change in speech perception. We examined whether the category boundary between /ε/ and /a/ changed as a result of perceptual training with orofacial somatosensory inputs. The experiment consisted of three phases: Baseline, Training, and Aftereffect. In all phases, a vowel identification test was used to identify the perceptual boundary between /ε/ and /a/. In the Baseline and Aftereffect phases, an adaptive method based on the maximum-likelihood procedure was applied to detect the category boundary using a small number of trials. In the Training phase, we used the method of constant stimuli to expose participants to stimulus variants that covered the range between /ε/ and /a/ evenly. In this phase, for an experimental group, somatosensory stimulation was applied in the upward direction whenever the stimulus sound was presented, to mimic the sensory input that accompanies speech production and learning. A control group (CTL) followed the same training procedure without somatosensory stimulation. When we compared category boundaries before and after paired auditory-somatosensory training, the boundary for participants in the experimental group reliably shifted in the direction of /ε/, indicating that the participants perceived /a/ more often than /ε/ as a consequence of training. In contrast, the CTL group showed no change. Although only a limited number of participants were retested, the perceptual shift was reduced and almost eliminated 1 week later. Our data suggest that repetitive exposure to somatosensory inputs in a task that simulates the sensory pairing occurring during speech production changes the perceptual system, and support the idea that somatosensory inputs play a role in speech perceptual adaptation, probably contributing to the formation of sound representations for speech perception.
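
A minimal sketch of how a category boundary can be estimated from identification responses, assuming a logistic psychometric function. The continuum steps and response proportions below are hypothetical, and the study's adaptive maximum-likelihood procedure would place stimuli trial by trial rather than fit after the fact.

    import numpy as np
    from scipy.optimize import curve_fit

    def psychometric(x, boundary, slope):
        # Probability of an /a/ response along the /ε/-/a/ continuum
        return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

    steps = np.arange(1, 10)                                   # continuum steps
    p_a = np.array([.02, .05, .1, .3, .55, .8, .9, .97, .99])  # P(respond /a/), hypothetical

    (boundary, slope), _ = curve_fit(psychometric, steps, p_a, p0=[5.0, 1.0])
    print(f"category boundary at step {boundary:.2f}")

A training-induced shift would appear as a change in the fitted boundary between the Baseline and Aftereffect phases.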

8.
PLoS One ; 16(12): e0260859, 2021.
Article in English | MEDLINE | ID: mdl-34874970

ABSTRACT

Musicians tend to have better auditory and motor performance than non-musicians because of their extensive musical experience. In a previous study, we established that loudness discrimination acuity is enhanced when a sound is produced by a precise force generation task. In this study, we compared this enhancement effect between experienced pianists and non-musicians. Without the force generation task, loudness discrimination acuity was better in pianists than in non-musicians. However, the force generation task enhanced loudness discrimination acuity similarly in both pianists and non-musicians. Reaction time was also reduced with the force generation task, but only in the non-musician group. The results suggest that the enhancement of loudness discrimination acuity with the precise force generation task is independent of musical experience and is, therefore, a fundamental function of auditory-motor interaction.


Subjects
Acoustic Stimulation, Cognition/physiology, Discrimination Learning/physiology, Loudness Perception, Music/psychology, Psychomotor Performance, Sound, Adult, Humans, Reaction Time, Young Adult
9.
Cortex ; 143: 195-204, 2021 10.
Article in English | MEDLINE | ID: mdl-34450567

ABSTRACT

Recent studies have demonstrated that a listener's auditory speech perception can be modulated by somatosensory input applied to the facial skin, suggesting that perception is an embodied process. However, speech perception is a multisensory process involving both the auditory and visual modalities. It is unknown whether, and to what extent, somatosensory stimulation of the facial skin modulates audio-visual speech perception. If speech perception is an embodied process, then somatosensory stimulation applied to the perceiver should influence audio-visual speech processing. Using the McGurk effect (the perceptual illusion that occurs when a sound is paired with the visual representation of a different sound, resulting in the perception of a third sound), we tested this prediction with a simple behavioral paradigm and, at the neural level, with event-related potentials (ERPs) and their cortical sources. We recorded ERPs from 64 scalp sites in response to congruent and incongruent audio-visual speech, randomly presented with and without somatosensory stimulation associated with facial skin deformation. Subjects judged whether the production was /ba/ or not under all stimulus conditions. In the congruent audio-visual condition, subjects identified the sound as /ba/, but not in the incongruent condition, consistent with the McGurk effect. Concurrent somatosensory stimulation improved participants' ability to correctly identify the production as /ba/ relative to the non-somatosensory condition in both congruent and incongruent conditions. ERPs in response to the somatosensory stimulation in the incongruent condition reliably diverged 220 ms after stimulation onset. Cortical sources were estimated around the left anterior temporal gyrus, the right middle temporal gyrus, the right posterior superior temporal lobe, and the right occipital region. The results demonstrate a clear multisensory convergence of somatosensory and audio-visual processing in both behavioral and neural measures, consistent with the perspective that speech perception is a self-referenced, sensorimotor process.
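
A minimal sketch of one way to locate the divergence point between two ERP waveforms, as reported above (~220 ms); the waveforms and the amplitude criterion are synthetic illustrations, not the study's statistics.

    import numpy as np

    t = np.arange(-100, 500)                        # ms relative to stimulation onset
    erp_a = np.exp(-((t - 300) ** 2) / 5000.0)      # synthetic condition A
    erp_b = 0.6 * np.exp(-((t - 300) ** 2) / 5000.0)  # synthetic condition B

    diff = np.abs(erp_a - erp_b)
    divergence = t[(t > 0) & (diff > 0.1)][0]       # first post-onset point above criterion
    print(f"divergence at {divergence} ms")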


Subjects
Speech Perception, Speech, Acoustic Stimulation, Auditory Perception, Humans, Photic Stimulation, Visual Perception
10.
J Syst Sci Syst Eng ; 30(4): 450-464, 2021.
Article in English | MEDLINE | ID: mdl-34054250

ABSTRACT

Artificial Intelligence is revolutionising our communication practices and the ways in which we interact with each other. This revolution affects not only how we communicate but also the nature of the partners with whom we communicate. Online discussion platforms now allow humans to communicate with artificial agents in the form of socialbots. Such agents have the potential to moderate online discussions and even to manipulate and alter public opinion. In this paper, we study this phenomenon using a large-scale agent platform that we constructed. At the heart of the platform lies an artificial agent that moderates online discussions using argumentative messages. We investigate the influence of the agent on the evolution of an online debate involving human participants; the agent dynamically reacts to their messages by moderating, supporting, or attacking their stances. We conducted two experiments to evaluate the platform while examining the effects of the conversational agent. The first experiment was a large-scale discussion with 1076 citizens from Afghanistan discussing urban policy-making in the city of Kabul; its goal was to increase citizen involvement in implementing the Sustainable Development Goals. The second experiment was a small-scale debate among a group of 16 students about globalisation and taxation in Myanmar. In the first experiment, we found that the agent improved the responsiveness of the participants and increased the number of identified ideas and issues. In the second experiment, we found that the agent polarised the debate by reinforcing the initial stances of the participants.

11.
Exp Brain Res ; 239(4): 1141-1149, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33555383

ABSTRACT

Motor execution alters sensory processing. Studies have shown that loudness perception changes when a sound is generated by active movement. However, it is still unknown whether and how motor-related changes in loudness perception depend on the task demands of motor execution. We examined whether different levels of precision demand in motor control affect loudness perception. We carried out a loudness discrimination test in which the sound stimulus was produced in conjunction with a force generation task, testing three target force amplitude levels. The force target was presented on a monitor as a fixed visual target, and the generated force was presented on the same monitor as a moving visual cursor. Participants adjusted their force amplitude within a predetermined range, without overshooting, using the visual target and moving cursor. In the control condition, the sound and visual stimuli were generated externally (without a force generation task). Discrimination performance was significantly better when the sound was produced by the force generation task than in the control condition, in which the sound was produced externally, although this improvement did not vary across the target force amplitude levels. The results suggest that the demand for precise control to produce a fixed amount of force may be key to obtaining the facilitatory effect of motor execution on auditory processing.


Subjects
Loudness Perception, Sound, Acoustic Stimulation, Cognition, Humans, Movement
12.
J Neurosurg Case Lessons ; 2(2): CASE21209, 2021 Jul 12.
Article in English | MEDLINE | ID: mdl-35854861

ABSTRACT

BACKGROUND: Pelvic deformity after resection of malignant pelvic tumors causes scoliosis. Although the central sacral vertical line (CSVL) is often used to evaluate coronal alignment and determine the treatment strategy for scoliosis, it is not clear whether the CSVL is a suitable coronal reference axis in cases with pelvic deformity. This report proposes a new coronal reference axis for use in cases with pelvic deformity and discusses the pathologies of spinal deformity remaining after revision surgery. OBSERVATIONS: A 14-year-old boy who had undergone internal hemipelvectomy and pelvic ring reconstruction 2 years earlier was referred to our hospital with severe back pain. Physical and radiographic examinations revealed severe scoliosis with pelvic deformity. The authors planned a surgical strategy based on the CSVL and performed pelvic ring reconstruction using a free vascularized fibula graft and spinopelvic fixation from L5 to the pelvis. After the procedure, the patient's back pain was relieved, but his scoliosis persisted. At the latest follow-up, his spinal deformity correction was acceptable with corset bracing, so the authors did not perform additional surgery. LESSONS: The CSVL may not be appropriate for evaluating coronal alignment in cases with pelvic deformity. Accurate preoperative planning is required to correct spinal deformities accompanied by pelvic deformity.

13.
Cartilage ; 13(1_suppl): 1487S-1493S, 2021 12.
Article in English | MEDLINE | ID: mdl-32493051

ABSTRACT

OBJECTIVE: This study aimed to evaluate variation in the anterior condylar height (ACH) of the distal femur in varus knee osteoarthritis and the association between ACH and knee flexion. DESIGN: Computed tomography (CT) images of 171 knees (143 patients; age 73.7 ± 8.3 years; 132 females, 39 males) with symptomatic primary knee osteoarthritis and varus alignment undergoing primary total knee arthroplasty, unilateral knee arthroplasty, or high tibial osteotomy were evaluated. Several other anatomic parameters were measured on CT or radiographs. The correlation between ACH and knee flexion was analyzed, and factors contributing to knee flexion were determined using multivariable regression analysis. RESULTS: Medial ACH (mean, 8.1 mm; range, -2.8 to 19.9 mm) was smaller (P < 0.001) but more variable (F = 1.8, P < 0.001) than lateral ACH (mean, 10.7 mm; range, 3.6-18.3 mm). Medial ACH was moderately correlated with flexion (r = -0.44; 95% confidence interval [CI], -0.55 to -0.32), whereas lateral ACH was weakly correlated (r = -0.38; 95% CI, -0.50 to -0.25). On multivariable linear regression analysis of knee flexion, body mass index (B [partial regression coefficient] = -1.1), patellofemoral Kellgren-Lawrence grade (B = -4.3), medial ACH (B = -1.2), medial posterior condylar offset (B = 1.2), age (B = -0.4), and varus alignment (B = -0.6) remained significant independent variables (adjusted R2 = 0.35). CONCLUSIONS: Wide variation and anteriorization of the anterior condyle of the distal femur were observed in advanced osteoarthritis, and medial ACH was an independent determinant of limited knee flexion.
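
A minimal sketch of the multivariable regression form described above, on synthetic data generated using the reported partial regression coefficients as ground truth; it illustrates the model structure only and will not reproduce the paper's estimates.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 171
    bmi = rng.normal(26, 4, n)               # body mass index (assumed distribution)
    m_ach = rng.normal(8.1, 4.0, n)          # medial anterior condylar height (mm)
    age = rng.normal(74, 8, n)
    # Synthetic flexion built from reported B values (-1.1, -1.2, -0.4) plus noise
    flexion = 160 - 1.1 * bmi - 1.2 * m_ach - 0.4 * age + rng.normal(0, 8, n)

    X = np.column_stack([np.ones(n), bmi, m_ach, age])   # design matrix with intercept
    coef, *_ = np.linalg.lstsq(X, flexion, rcond=None)
    print("intercept, B_bmi, B_ach, B_age =", np.round(coef, 2))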


Subjects
Collateral Ligaments, Femur/diagnostic imaging, Knee Osteoarthritis/diagnostic imaging, Articular Range of Motion/physiology, Aged, Aged 80 and over, Knee Arthroplasty, Collateral Ligaments/diagnostic imaging, Female, Femur/surgery, Humans, Knee Joint/diagnostic imaging, Knee Joint/surgery, Male, Knee Osteoarthritis/surgery, X-Ray Computed Tomography
14.
J Acoust Soc Am ; 148(3): EL279, 2020 09.
Article in English | MEDLINE | ID: mdl-33003866

ABSTRACT

Somatosensory stimulation associated with facial skin deformation has been developed and applied efficiently in the study of speech production and speech perception. However, the technique has been limited to a simplified, unidirectional pattern of stimulation and cannot reproduce realistic stimulation patterns related to multidimensional orofacial gestures. To overcome this limitation, a new multi-actuator system was developed that can synchronously deform the facial skin in multiple directions. The first prototype provides stimulation in two directions, and its efficiency is evaluated using a temporal order judgement test involving vertical and horizontal facial skin stretches at the sides of the mouth.


Subjects
Speech Perception, Speech, Face, Gestures, Mouth
15.
J Neurophysiol ; 124(4): 1103-1109, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32902327

ABSTRACT

Speech learning requires precise motor control, but it likewise requires transient storage of information so that upcoming movements can be adjusted based on the success or failure of previous attempts. The contribution of somatic sensory memory for limb position has been documented in work on arm movement; in speech, however, the sensory support for production comes from both somatosensory and auditory inputs, and accordingly sensory memory for sounds, for somatic inputs, or for both might contribute to learning. In the present study, adaptation to altered auditory feedback was used as an experimental model of speech motor learning. Participants also underwent tests of both auditory and somatic sensory memory. We found that although auditory memory for speech sounds is better than somatic memory for speechlike facial skin deformations, somatic sensory memory predicts adaptation, whereas auditory sensory memory does not. Thus, even though speech relies substantially on auditory inputs, and in the present manipulation adaptation requires the minimization of auditory error, it is somatic inputs that provide the memory support for learning. NEW & NOTEWORTHY: In speech production, almost everyone achieves an exceptionally high level of proficiency. This is remarkable because speech involves some of the smallest and most carefully timed movements of which we are capable. The present paper demonstrates that sensory memory contributes to speech motor learning. Moreover, we report the surprising result that somatic sensory memory predicts speech motor learning, whereas auditory memory does not.


Subjects
Memory, Motor Skills, Speech, Adolescent, Adult, Female, Humans, Male, Speech Perception, Visual Perception
16.
J Neurophysiol ; 123(6): 2491-2503, 2020 06 01.
Article in English | MEDLINE | ID: mdl-32432505

ABSTRACT

The human tongue is atypical as a motor system because its movement is produced by deforming its soft tissue via muscles that are largely embedded within it (a muscular hydrostat). However, the neurophysiological mechanisms enabling fine tongue motor control are not well understood. We investigated sensorimotor control mechanisms of the tongue in a perturbation experiment. A mechanical perturbation was applied to the tongue during the articulation of three vowels (/i/, /e/, /ε/) under voicing, whispering, and posturing conditions. Tongue movements were measured at three surface locations in the sagittal plane using electromagnetic articulography. We found that the displacement induced by the external force was quickly compensated for. Individual sensors did not return to their original positions but moved toward a position on the original tongue contour for that vowel. The amplitude of the compensatory response at each tongue site varied systematically with the articulatory condition. A mathematical simulation that included reflex mechanisms suggested that the observed compensatory response can be attributed to a reflex mechanism rather than to passive tissue properties. The results provide evidence for quick compensatory mechanisms in the tongue that may depend on tunable reflexes. Tongue posture for vowels appears to be regulated in relation to the shape of the tongue contour rather than to specific positions of individual tissue points. NEW & NOTEWORTHY: This study presents evidence of quick compensatory mechanisms in tongue motor control for speech production. Tongue posture is controlled not in relation to a specific tongue position but to the shape of the tongue contour that achieves specific speech sounds. Modulation of compensatory responses by task demands and mathematical simulations support the idea that the quick compensatory response is driven by a reflex mechanism.
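
A minimal sketch of the contour-based idea above: scoring a recovered sensor position by its distance to the original tongue contour rather than to the original sensor point. The contour shape and positions are illustrative, not articulography data.

    import numpy as np

    # Illustrative sagittal tongue contour (mm), not measured data
    contour = np.column_stack([np.linspace(0, 40, 200),
                               10 + 8 * np.sin(np.linspace(0, np.pi, 200))])

    def dist_to_contour(p, contour):
        """Distance from point p to the nearest sampled contour point."""
        return np.min(np.linalg.norm(contour - p, axis=1))

    original = np.array([20.0, 18.0])     # sensor position before perturbation
    recovered = np.array([23.0, 17.6])    # position after compensation

    print("distance to original point :", round(np.linalg.norm(recovered - original), 2))
    print("distance to tongue contour :", round(dist_to_contour(recovered, contour), 2))

A small contour distance alongside a larger point distance is the signature reported above: the sensor lands back on the vowel's contour, not on its original position.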


Subjects
Motor Activity/physiology, Posture/physiology, Reflex/physiology, Speech/physiology, Tongue/physiology, Adolescent, Adult, Female, Humans, Male, Young Adult
17.
Neurosci Lett ; 730: 135045, 2020 06 21.
Article in English | MEDLINE | ID: mdl-32413541

ABSTRACT

Modulation of auditory activity occurs before and during voluntary speech movement. However, it is unknown whether orofacial somatosensory input is modulated in the same manner. The current study examined whether somatosensory event-related potentials (ERPs) in response to facial skin stretch change during speech and non-speech production tasks. Specifically, we compared ERP changes to somatosensory stimulation across different orofacial postures and speech utterances. Participants produced three different vowel sounds (voicing) or maintained a similar orofacial posture without voicing (posturing). ERPs were recorded from 64 scalp sites in response to somatosensory stimulation under six task conditions (three vowels × voicing/posturing) and compared to a resting baseline condition. The first negative peak for the vowel /u/ was reliably reduced from baseline in both the voicing and posturing tasks, but the other conditions did not differ. The second positive peak was reduced for all voicing tasks compared to the posturing tasks. The results suggest that the sensitivity of somatosensory ERPs to facial skin deformation is modulated by the task, and that somatosensory processing during speaking may be modulated differently depending on phonetic identity.


Subjects
Evoked Potentials/physiology, Speech Perception/physiology, Speech/physiology, Voice/physiology, Acoustic Stimulation/methods, Electroencephalography/methods, Humans, Phonetics, Somatosensory Cortex/physiology
18.
Biol Pharm Bull ; 43(4): 731-735, 2020.
Article in English | MEDLINE | ID: mdl-32238715

ABSTRACT

Alzheimer's disease (AD) is characterized by the formation of extracellular amyloid plaques containing the amyloid β-protein (Aβ) within the parenchyma of the brain. Aβ is considered to be the key pathogenic factor of AD. Recently, we showed that the angiotensin II type 1 receptor (AT1R), which regulates blood pressure, is involved in Aβ production, and that telmisartan (Telm), an angiotensin II receptor blocker (ARB), increased Aβ production via AT1R. However, the precise mechanism by which AT1R is involved in Aβ production is unknown. Interestingly, AT1R, a G protein-coupled receptor, has been strongly suggested to take part in signal transduction by heterodimerizing with the β2-adrenergic receptor (β2-AR), which has also been shown to be involved in Aβ generation. Therefore, in this study, we aimed to clarify whether the interaction between AT1R and β2-AR is involved in the regulation of Aβ production. To address this, we analyzed whether the increase in Aβ production caused by Telm treatment is affected by β-AR antagonists, using fibroblasts overexpressing amyloid precursor protein (APP). We found that the increase in Aβ production caused by Telm treatment was decreased more strongly by the β2-AR-selective antagonist ICI-118551 than by β1-AR-selective antagonists. Furthermore, deficiency of AT1R abolished the effect of the β2-AR antagonist on the stimulation of Aβ production caused by Telm. Taken together, the interaction between AT1R and β2-AR is likely to be involved in Aβ production.


Subjects
Amyloid beta-Peptides/metabolism, Angiotensin Type 1 Receptor/metabolism, beta-2 Adrenergic Receptors/metabolism, Adrenergic beta-Antagonists/pharmacology, Angiotensin II Type 1 Receptor Blockers/pharmacology, Animals, Atenolol/pharmacology, Bisoprolol/pharmacology, Cultured Cells, Inbred C57BL Mice, Propanolamines/pharmacology, Propranolol/pharmacology, Telmisartan/pharmacology
19.
Cognition ; 197: 104163, 2020 04.
Article in English | MEDLINE | ID: mdl-31891832

ABSTRACT

There is accumulating evidence that articulatory/motor knowledge plays a role in phonetic processing, such as the recent finding that orofacial somatosensory inputs may influence phoneme categorization. Here we show that somatosensory inputs also contribute at a higher level of the speech perception chain, namely in word segmentation and lexical decision. We carried out an auditory identification test using a set of French phrases consisting of the definite article "la" followed by a noun, which may be segmented differently according to the placement of accents within the phrase. Somatosensory stimulation was applied to the facial skin at various positions within acoustic utterances of these phrases, which had been recorded with a neutral accent, that is, with all syllables given similar emphasis. We found that lexical decisions reflecting word segmentation were significantly and systematically biased depending on the timing of somatosensory stimulation. This bias was not induced when the somatosensory stimulation was applied to skin other than that of the face. These results provide evidence that the orofacial somatosensory system contributes to lexical perception in situations that would be disambiguated by different articulatory movements, and suggest that articulatory/motor knowledge might be involved in speech segmentation.


Subjects
Speech Perception, Speech, Face, Humans, Phonetics
20.
Front Hum Neurosci ; 13: 344, 2019.
Article in English | MEDLINE | ID: mdl-31636554

ABSTRACT

Multisensory integration (MSI) allows us to link sensory cues from multiple sources and plays a crucial role in speech development. However, it is not clear whether humans have an innate ability to integrate sensory information in speech or whether repeated sensory input while the brain is maturing leads to efficient integration. We investigated the integration of auditory and somatosensory information in speech processing in a bimodal perceptual task in 15 young adults (age 19-30) and 14 children (age 5-6). Participants were asked to identify whether the perceived target was the sound /e/ or /ø/. Half of the stimuli were presented under a unimodal condition with only auditory input. The other stimuli were presented under a bimodal condition with both auditory input and somatosensory input, the latter consisting of facial skin stretches provided by a robotic device mimicking the articulation of the vowel /e/. The results indicate that the effect of somatosensory information on sound categorization was larger in adults than in children. This suggests that the integration of auditory and somatosensory information matures over the course of development.
